Azam Moghadam; Mohammadreza Falsafinejhad; Norali Farokhi; Masoomeh Estaji
Abstract
Traditional approaches in educational measurement face practical and theoretical challenges in demonstrating language competencies, and their ability to assess candidates' skills and select among them has been questioned. To overcome these restrictions, cognitive diagnostic models (CDMs) have been introduced and applied. Objective: The purpose of this study was a diagnostic analysis of the reading comprehension items of a general English language test (the PhD entrance exam) in order to investigate the underlying skills of the test, inspect model convergence and fit, and examine the diagnostic power of the test and the mastery status of examinees. Method: The study was conducted within a cognitive diagnostic modeling framework. The population comprised all PhD candidates majoring in English teaching, linguistics, translation, and English literature; 2,754 examinees served as the sample. Task analysis, coding, and verbal reports were used to determine the underlying skills of the test. Results: In the qualitative section, six skills were identified: using vocabulary knowledge, using syntactic knowledge, extracting explicit information (scanning), drawing inferences, connecting and synthesizing, and using pragmatic knowledge. Quantitative analyses with the non-compensatory reduced fusion model (FM), estimated by Markov chain Monte Carlo (MCMC), indicated MCMC convergence and model fit, supporting the applicability of the fusion model to English language tests. The ability parameters were low for all skills, and using vocabulary knowledge was the simplest skill. The mean item proportion-correct score was .42, and the test did not have high diagnostic power. Discussion and conclusion: Using cognitive diagnostic models in general, and the fusion model in particular, yields more information about tests and examinees' responses, helps achieve the goal of assessment for learning, and supports correctly classifying examinees as masters or non-masters.
Key Words: Cognitive Diagnostic Models (CDMs), Non-compensatory Fusion Model (FM), Reading Comprehension Skills, English Language
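The non-compensatory logic described in the abstract can be illustrated with a small sketch. The skill names come from the study, but the Q-matrix entries, item count, and mastery profile below are purely hypothetical, and the conjunctive rule is a simplification of the reduced fusion model (in the spirit of DINA-style classification), not the paper's actual estimation procedure.

```python
# Hypothetical sketch: a Q-matrix maps each reading item to the skills it
# requires, and a conjunctive (non-compensatory) rule classifies an
# examinee as expected to succeed only when ALL required skills are
# mastered. All numeric values are illustrative, not the study's data.

SKILLS = ["vocabulary", "syntax", "scan_explicit",
          "inference", "connect_synthesize", "pragmatics"]

# Q[i][k] = 1 if item i requires skill k (illustrative assignments)
Q = [
    [1, 0, 1, 0, 0, 0],   # item 1: vocabulary + scanning
    [0, 1, 0, 1, 0, 0],   # item 2: syntax + inference
    [1, 0, 0, 0, 1, 1],   # item 3: vocabulary + synthesis + pragmatics
]

def conjunctive_success(mastery, q_row):
    """True only if every skill the item requires is mastered."""
    return all(m == 1 for m, q in zip(mastery, q_row) if q == 1)

# A hypothetical examinee who has mastered only the first three skills:
alpha = [1, 1, 1, 0, 0, 0]
expected = [conjunctive_success(alpha, row) for row in Q]
print(expected)  # [True, False, False]
```

Under a non-compensatory model, strength in one skill cannot offset a deficit in another, which is why items 2 and 3 fail here despite partial mastery.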
Behnam Karimi; M. Falsafinejad; Fariborz Dortaj
Volume 2, Issue 6, January 2012, Pages 1-23
Abstract
Background: Ease of scoring, administration, and objectivity has made multiple-choice tests the essential instruments in large-scale assessments. There has been intense criticism of multiple-choice items, however: for example, they do not serve all educational goals (they assess low cognitive levels), and examinees can answer them by guessing. To address these problems, some researchers have suggested increasing the number of choices per question.
Objectives: The objective of this research was to study the effects of the number of item choices on the psychometric characteristics of the test and its items, and on the estimated ability of examinees, within classical test theory and item response theory (IRT).
Methods: The statistical population was all high school students in Shiraz, of whom 608 were randomly selected as the sample. To answer the research questions, an empirical method was used, and data were collected with a language test and an arithmetic test developed for this purpose.
Results: Data analysis indicated no significant effect of the number of item choices on item parameters, and the effect of the number of choices on examinees' estimated psychometric characteristics was equal across the different tests. Furthermore, there were differences between parameters estimated under classical test theory and item response theory (IRT).
Conclusion: After checking the assumptions of item response theory (IRT), the data were found to fit the two-parameter model better, and model fit did not differ across the number of item choices. In addition, estimated ability also differed with the number of item choices.
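The two-parameter logistic (2PL) model mentioned in the conclusion can be sketched briefly. The formula is the standard 2PL item response function; the parameter values below are illustrative, not estimates from the study.

```python
# Illustrative sketch of the two-parameter logistic (2PL) IRT model:
# P(correct) depends on examinee ability theta, item discrimination a,
# and item difficulty b. Parameter values here are hypothetical.

import math

def p_correct_2pl(theta, a, b):
    """2PL: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5, regardless of a:
print(round(p_correct_2pl(0.0, a=1.2, b=0.0), 2))  # 0.5

# Higher discrimination (a) makes success more likely above b:
low_a = p_correct_2pl(1.0, a=0.5, b=0.0)
high_a = p_correct_2pl(1.0, a=2.0, b=0.0)
print(high_a > low_a)  # True
```

Unlike the one-parameter (Rasch) model, the 2PL lets each item have its own discrimination, which is the extra flexibility the reported better fit reflects.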